Proximal Diagonal Newton Methods for Composite Optimization Problems
This paper proposes new proximal Newton-type methods with a diagonal metric
for solving composite optimization problems whose objective function is the sum
of a twice continuously differentiable function and a proper closed
directionally differentiable function. Although proximal Newton-type methods
using diagonal metrics have been shown numerically to outperform the proximal
gradient method, no theoretical results have been established to support this
superiority. Although our proposed method is based on a simple idea, its
convergence rate indicates an advantage over the proximal gradient method in
certain situations. Numerical experiments show that the proposed algorithms are
effective, especially in the nonconvex case.
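As a hedged illustration of the general idea (not the paper's exact algorithm), the sketch below applies a proximal Newton-type step with a diagonal metric to an l1-regularized least-squares problem. With a diagonal metric D, the scaled proximal mapping separates into coordinatewise soft-thresholding, which is what makes diagonal metrics cheap. The problem data, safeguard constant, and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding; t may be a vector of coordinatewise thresholds.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_diag_newton_l1(grad_f, diag_hess_f, x0, lam, iters=500):
    # Iterate x+ = prox^D_{lam*||.||_1}(x - D^{-1} grad_f(x)) with D diagonal,
    # which reduces to soft-thresholding with per-coordinate thresholds lam/d_i.
    x = x0.astype(float)
    for _ in range(iters):
        g = grad_f(x)
        d = np.maximum(diag_hess_f(x), 1e-8)  # keep the diagonal metric positive
        x = soft_threshold(x - g / d, lam / d)
    return x

# Toy problem: f(x) = 0.5*||A x - b||^2, so grad_f(x) = A^T (A x - b) and
# diag(hess f) = diag(A^T A). A is chosen diagonally dominant so the diagonal
# metric majorizes the Hessian well and the fixed-step iteration converges.
rng = np.random.default_rng(0)
A = 3.0 * np.eye(5) + 0.1 * rng.standard_normal((5, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true
d_AtA = np.sum(A * A, axis=0)  # diagonal of A^T A
x_hat = prox_diag_newton_l1(lambda x: A.T @ (A @ x - b),
                            lambda x: d_AtA, np.zeros(5), lam=0.1)
```

On this toy instance the recovered `x_hat` matches the sparse `x_true` up to the small shrinkage bias of the l1 penalty.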
Inexact proximal DC Newton-type method for nonconvex composite functions
We consider a class of difference-of-convex (DC) optimization problems where
the objective function is the sum of a smooth function and a possibly nonsmooth
DC function. The application of proximal DC algorithms to this problem
class is well known. In this paper, we combine a proximal DC algorithm with an
inexact proximal Newton-type method to propose an inexact proximal DC
Newton-type method. We demonstrate global convergence properties of the
proposed method. In addition, we give a memoryless quasi-Newton matrix for
scaled proximal mappings and consider a two-dimensional system of semi-smooth
equations that arises in computing these mappings. To obtain the scaled
proximal mappings efficiently, we adopt a semi-smooth Newton method to solve
the system inexactly. Finally, we present numerical experiments investigating
the efficiency of the proposed method, showing that it outperforms existing
methods.
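For orientation, here is a minimal sketch of a plain proximal DC algorithm, without the inexact Newton-type acceleration the paper proposes, applied to least squares with the standard l1 minus l2 DC regularizer lam*(||x||_1 - ||x||_2). Each iteration linearizes the concave part at the current point and takes a proximal gradient step. The test problem and parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding, the proximal mapping of t*||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def pdca_l1_minus_l2(A, b, lam, iters=500):
    # min_x 0.5*||A x - b||^2 + lam*(||x||_1 - ||x||_2), a DC program:
    # linearize -lam*||x||_2 at x_k via a subgradient xi, then prox step.
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        nrm = np.linalg.norm(x)
        xi = lam * x / nrm if nrm > 0 else np.zeros_like(x)  # subgradient of lam*||x||_2
        grad = A.T @ (A @ x - b) - xi  # smooth gradient plus linearized concave part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse-recovery instance (noiseless): the DC regularizer reduces the
# shrinkage bias of the plain l1 penalty.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, -2.0
b = A @ x_true
x_hat = pdca_l1_minus_l2(A, b, lam=0.1)
```

The paper's contribution replaces this fixed-step proximal gradient update with an inexact proximal Newton-type step whose scaled proximal mapping is itself computed by a semi-smooth Newton method.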
Memoryless quasi-Newton methods for large-scale unconstrained optimization problems
Tokyo University of Science, 201